Recent advances in recommender systems have been remarkably successful in optimizing immediate engagement. However, long-term user engagement, a more desirable performance metric, remains difficult to improve. Meanwhile, recent reinforcement learning (RL) algorithms have shown their effectiveness in a variety of long-term goal optimization tasks. For this reason, RL is widely considered a promising framework for optimizing long-term user engagement in recommendation. However, applying RL heavily relies on well-designed rewards, and designing rewards related to long-term user engagement is quite difficult. To mitigate this problem, we propose a novel paradigm, Preference-based Recommender systems (PrefRec), which allows RL recommender systems to learn from preferences over users' historical behaviors rather than explicitly defined rewards. Such preferences are easily accessible through techniques such as crowdsourcing, as they do not require any expert knowledge. With PrefRec, we can fully exploit the advantages of RL in optimizing long-term goals while avoiding complex reward engineering. PrefRec uses the preferences to automatically train a reward function in an end-to-end manner. The reward function is then used to generate learning signals to train the recommendation policy. Furthermore, we design an effective optimization method for PrefRec, which uses an additional value function, expectile regression and reward-model pre-training to improve performance. Extensive experiments are conducted on a variety of long-term user engagement optimization tasks. The results show that PrefRec significantly outperforms previous state-of-the-art methods on all the tasks.
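The abstract does not give implementation details, but the standard way to learn a reward model from pairwise preferences is a Bradley-Terry style objective. The sketch below is a minimal, hedged illustration of that idea; the module names, dimensions, and the use of summed per-step rewards over behavior sequences are assumptions, not PrefRec's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a sequence of user-behavior features to a scalar return estimate."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, steps, feat_dim); sum per-step rewards into a sequence-level score
        return self.net(seq).squeeze(-1).sum(dim=-1)

def preference_loss(model, seq_a, seq_b, prefer_a):
    """Bradley-Terry loss: the preferred sequence should get the higher score."""
    logits = model(seq_a) - model(seq_b)          # (batch,)
    return F.binary_cross_entropy_with_logits(logits, prefer_a.float())

# toy usage: 8 preference pairs over 20-step behavior sequences with 16-dim features
model = RewardModel(feat_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seq_a, seq_b = torch.randn(8, 20, 16), torch.randn(8, 20, 16)
prefer_a = torch.randint(0, 2, (8,))              # 1 if annotators prefer sequence A
loss = preference_loss(model, seq_a, seq_b, prefer_a)
loss.backward()
opt.step()
```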
Real-world data tends to be heavily imbalanced and severely skews data-driven deep neural networks, which makes Long-Tailed Recognition (LTR) a highly challenging task. Existing LTR methods seldom train Vision Transformers (ViTs) with Long-Tailed (LT) data, while the off-the-shelf pretrained weights of ViTs always lead to unfair comparisons. In this paper, we systematically investigate the performance of ViTs in LTR and propose LiVT to train ViTs from scratch only with LT data. Observing that ViTs suffer from more severe LTR problems, we conduct Masked Generative Pretraining (MGP) to learn generalized features. With ample and solid evidence, we show that MGP is more robust than supervised approaches. In addition, Binary Cross Entropy (BCE) loss, which performs well with ViTs, encounters predicaments in LTR. We further propose a balanced BCE to ameliorate it with strong theoretical grounding. Specifically, we derive the unbiased extension of the Sigmoid and compensate with extra logit margins to deploy it. Our Bal-BCE contributes to the quick convergence of ViTs within just a few epochs. Extensive experiments demonstrate that, with MGP and Bal-BCE, LiVT successfully trains ViTs without any additional data and significantly outperforms comparable state-of-the-art methods, e.g., our ViT-B achieves 81.0% Top-1 accuracy on iNaturalist 2018 without bells and whistles. Code is available at https://github.com/XuZhengzhuo/LiVT.
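The abstract describes Bal-BCE as compensating extra logit margins derived from an unbiased extension of the Sigmoid; the paper's exact derivation is not reproduced here, but the sketch below shows the general logit-adjustment idea it alludes to, shifting each class logit by a margin built from the class prior before applying BCE. The margin form and scaling are assumptions rather than the released loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BalancedBCE(nn.Module):
    """BCE with per-class logit margins derived from class frequencies.

    A hedged sketch of the logit-adjustment idea: rare classes receive a
    compensating additive margin so the sigmoid is no longer biased toward
    head classes. The exact margin used by Bal-BCE may differ.
    """
    def __init__(self, class_counts: torch.Tensor):
        super().__init__()
        prior = class_counts.float() / class_counts.sum()
        self.register_buffer("log_prior", torch.log(prior))

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: (batch, num_classes); targets: (batch,) integer labels
        onehot = F.one_hot(targets, logits.size(1)).float()
        adjusted = logits + self.log_prior          # compensate class-prior bias
        return F.binary_cross_entropy_with_logits(adjusted, onehot)

# toy usage on a long-tailed 5-class problem
counts = torch.tensor([5000, 1000, 200, 50, 10])
criterion = BalancedBCE(counts)
loss = criterion(torch.randn(4, 5), torch.tensor([0, 2, 4, 1]))
```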
Accurately predicting the future trajectories of interactive road agents and planning a socially compliant, human-like trajectory accordingly are important for autonomous vehicles. In this paper, we propose a planning-centric prediction neural network, which takes the surrounding agents' historical states and map context information as input and outputs joint multi-modal predicted trajectories for the surrounding agents, as well as a sequence of control commands for the ego vehicle produced by imitation learning. An agent-agent interaction module along the time axis is proposed in our network architecture to better capture the relationships among all the other intelligent agents on the road. To incorporate the map's topological information, a Dynamic Graph Convolutional Neural Network (DGCNN) is employed to process the road network topology. Moreover, the whole architecture can serve as a backbone for the Differentiable Integrated motion Prediction with Planning (DIPP) method by providing accurate prediction results and initial planning commands. Experiments on real-world datasets demonstrate the improvements of our proposed method in both planning and prediction accuracy over previous state-of-the-art methods.
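The agent-agent interaction module is only described at a high level above; the sketch below illustrates one plausible form of it, applying multi-head self-attention across agents independently at each timestep. The layer shapes and the use of nn.MultiheadAttention are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AgentInteraction(nn.Module):
    """Self-attention over agents, applied separately at every timestep."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, agents, dim)
        b, t, a, d = x.shape
        flat = x.reshape(b * t, a, d)               # attend over the agent axis
        out, _ = self.attn(flat, flat, flat)
        return self.norm(flat + out).reshape(b, t, a, d)

# toy usage: 2 scenes, 10 past timesteps, 6 agents, 64-dim state embeddings
module = AgentInteraction()
states = torch.randn(2, 10, 6, 64)
print(module(states).shape)  # torch.Size([2, 10, 6, 64])
```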
Tensor program tuning is a non-convex objective optimization problem for which search-based approaches have proven effective. At the core of search-based approaches lies the design of the cost model. Although deep learning-based cost models perform significantly better than other methods, they still fall short and suffer from the following problems. First, their feature extraction heavily relies on expert-level domain knowledge of hardware architectures. Even so, the extracted features are often unsatisfactory and require separate considerations for CPUs and GPUs. Second, a cost model trained on one hardware platform usually performs poorly on another, a problem we call cross-hardware unavailability. To address these problems, we propose TLP and MTL-TLP. TLP is a deep learning-based cost model that facilitates tensor program tuning. Instead of extracting features from the tensor program itself, TLP extracts features from the schedule primitives. We treat schedule primitives as a tensor language, so TLP is effectively a Tensor Language Processing task. In this way, the task of predicting tensor program latency through the cost model is transformed into a natural language processing (NLP) regression task. MTL-TLP combines Multi-Task Learning and TLP to cope with the cross-hardware unavailability problem. We incorporate these techniques into the Ansor framework and conduct detailed experiments. Results show that TLP can speed up the average search time by 9.1x and 3.0x on CPU and GPU workloads, respectively, compared to the state-of-the-art implementation. MTL-TLP achieves speed-ups of 4.7x and 2.9x on CPU and GPU workloads, respectively, using only 7% of the target hardware data.
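TLP's central idea, as described above, is to featurize schedule primitives as token sequences and regress latency with an NLP-style model. The sketch below is a minimal hedged illustration of that framing; the tokenizer, vocabulary, and Transformer encoder are generic placeholders, not TLP's actual feature pipeline.

```python
import torch
import torch.nn as nn

class LatencyRegressor(nn.Module):
    """Embeds schedule-primitive tokens and regresses a latency score."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer ids of schedule primitives/arguments
        h = self.encoder(self.embed(tokens))
        return self.head(h.mean(dim=1)).squeeze(-1)   # pooled sequence -> latency

# toy usage: a schedule such as ["split", "reorder", "vectorize", ...] mapped to ids
vocab = {"split": 0, "reorder": 1, "vectorize": 2, "parallel": 3, "unroll": 4}
tokens = torch.tensor([[0, 1, 2, 3, 4, 0, 1, 2]])
model = LatencyRegressor(vocab_size=len(vocab))
pred_latency = model(tokens)   # in practice trained with an MSE or ranking loss
```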
Benefiting from its search efficiency, differentiable neural architecture search (NAS) has evolved into the dominant approach for automatically designing competitive deep neural networks (DNNs). We note that DNNs must be executed under strict performance constraints in real-world scenarios, for example the runtime latency on autonomous vehicles. However, to obtain an architecture that meets a given performance constraint, previous hardware-aware differentiable NAS methods have to repeat the search multiple times, manually tuning hyperparameters by trial and error, so the total design cost grows proportionally. To address this issue, we introduce a lightweight hardware-aware differentiable NAS framework, dubbed LightNAS, which strives to find the desired architecture satisfying various performance constraints through a one-time search (i.e., you only search once). Extensive experiments are conducted to show the superiority of LightNAS over previous state-of-the-art methods.
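The one-time search described above couples architecture quality with a user-specified performance constraint. The sketch below shows a generic differentiable formulation of that idea, penalizing the expected latency of the architecture distribution when it exceeds a target; the penalty form and the trade-off coefficient are assumptions, not LightNAS's exact objective.

```python
import torch

def constrained_nas_loss(task_loss, op_logits, op_latencies, target_latency, lam):
    """Task loss plus a differentiable penalty on expected latency above the target.

    op_logits:    (layers, candidates) architecture parameters (alphas)
    op_latencies: (layers, candidates) measured latency of each candidate op
    lam:          trade-off scalar; in a one-shot scheme it can itself be adapted
                  so the constraint is met without manual tuning.
    """
    probs = torch.softmax(op_logits, dim=-1)
    expected_latency = (probs * op_latencies).sum()
    violation = torch.relu(expected_latency - target_latency)
    return task_loss + lam * violation

# toy usage: 8 layers, 5 candidate operations per layer
alphas = torch.randn(8, 5, requires_grad=True)
latencies = torch.rand(8, 5) * 2.0          # ms, e.g. from a lookup table
loss = constrained_nas_loss(torch.tensor(1.0), alphas, latencies,
                            target_latency=6.0, lam=torch.tensor(0.5))
loss.backward()
```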
Sophisticated traffic analytics, such as encrypted traffic analysis and unknown malware detection, emphasize the need for advanced methods to analyze network traffic. Traditional approaches that use fixed patterns, signature matching, and rules to detect known patterns in network traffic are being replaced by AI (Artificial Intelligence) driven algorithms. However, the lack of a high-performance, networking-specific AI framework makes it impossible to deploy AI-based real-time processing within networking workloads. In this paper, we describe the design of the Traffic Analytics Development Kit (TADK), an industry-standard framework for AI-based networking workload processing. TADK enables real-time AI-based networking workload processing in network devices from the data center to the edge, without specialized hardware (e.g., GPUs, neural processing units, etc.). We have deployed TADK in commodity WAF and 5G UPF, and the evaluation results show that TADK can achieve a throughput of up to 35.3 Gbps per core for traffic feature extraction and 6.5 Gbps per core for traffic classification, and can reduce SQLi/XSS detection to as low as 4.5 us per request with higher accuracy than fixed-pattern solutions.
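TADK's own APIs are not shown in the abstract. Purely as a hedged illustration of the kind of workload it targets, the sketch below extracts simple per-flow statistical features and scores them with a stand-in classifier; the feature set and model are generic placeholders, not TADK code.

```python
import numpy as np

def flow_features(pkt_sizes, iat_us):
    """Generic per-flow statistics: packet-size and inter-arrival-time summaries."""
    return np.array([
        len(pkt_sizes), np.mean(pkt_sizes), np.std(pkt_sizes),
        np.mean(iat_us), np.std(iat_us),
    ])

# toy usage: score one flow with placeholder weights standing in for a trained model
feats = flow_features(pkt_sizes=[60, 1500, 1500, 52], iat_us=[120.0, 3.5, 4.1, 900.0])
weights = np.random.randn(5)
score = 1.0 / (1.0 + np.exp(-feats @ weights))
label = "suspicious" if score > 0.5 else "benign"
```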
In this paper, we present the Multi-Forgery Detection Challenge, held in conjunction with an IEEE Computer Society workshop at CVPR 2022. Our Multi-Forgery Detection Challenge aims at detecting automatic image manipulations, including but not limited to image editing, image synthesis, image generation, image Photoshopping, etc. Our challenge attracted 674 teams from all over the world, with about 2000 valid result submissions. We invited the top ten teams to present their solutions to the challenge, among which three teams were awarded prizes in the grand finale. In this paper, we present the solutions of the top three teams in order to boost research in the field of image forgery detection.
Cervical abnormal cell detection is a challenging task, as the morphological differences between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists always take surrounding cells as references and make careful comparisons to identify its abnormality. To mimic these clinical behaviors, we propose to explore contextual relationships to improve the performance of cervical abnormal cell detection. Specifically, both the contextual relationships between cells and between cells and the global image are exploited to enhance the features of each region of interest (RoI) proposal. Accordingly, two modules, termed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish strong baselines by using single-head or double-head Faster R-CNN with a feature pyramid network (FPN), and integrate our RRAM and GRAM into them to validate the effectiveness of the proposed modules. Experiments on a large cervical cell detection dataset consisting of 40,000 cytology images show that the introduction of both RRAM and GRAM achieves better average precision (AP) than the baseline methods. Moreover, when cascading RRAM and GRAM, our method outperforms state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancement scheme can also facilitate image-level and smear-level classification. The code and trained models are publicly available at https://github.com/cviu-csu/cr4cacd.
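The RRAM and GRAM modules are only named above; the sketch below illustrates the underlying idea of RoI-to-RoI relation attention, letting each proposal's pooled feature attend to all other proposals before it is passed to the detection heads. The attention layout and dimensions are assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class RoIRelationAttention(nn.Module):
    """Enhances each RoI feature by attending to the other RoI proposals."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (batch, num_rois, dim) pooled features of RoI proposals
        ctx, _ = self.attn(roi_feats, roi_feats, roi_feats)
        return self.norm(roi_feats + ctx)            # residual-enhanced RoI features

# toy usage: 100 proposals per image, 256-dim pooled features
module = RoIRelationAttention()
enhanced = module(torch.randn(2, 100, 256))          # fed to the detection heads
```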
Achieving human-level safety performance for autonomous vehicles (AVs) remains a challenge. A key bottleneck is the so-called "long-tail challenge," which usually refers to the problem that AVs should be able to handle seemingly endless low-probability safety-critical driving scenarios, even after accumulating millions of test miles on public roads. However, there is neither a rigorous definition nor an analysis of the properties of such problems, which hinders progress toward solving them. In this paper, we systematically analyze the "long-tail challenge" and propose the concept of the "curse of rarity" (CoR). We conclude that the CoR, compounded on top of the "curse of dimensionality" (CoD), is the root cause of the "long-tail challenge," owing to the rarity of safety-critical events in the high dimensionality of driving environments. We discuss the CoR in various aspects of AV development, including perception, prediction, and planning, as well as validation and verification. Based on these analyses and discussions, we propose potential solutions to address the CoR in order to accelerate AV development and deployment.
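As a hedged back-of-the-envelope illustration of why rarity dominates: estimating an event probability p by naive Monte Carlo to within relative error eps needs on the order of (1 - p) / (p * eps^2) independent trials, so rarer events require proportionally more test miles. The numbers below are illustrative assumptions, not figures from the paper.

```python
# Naive Monte Carlo: the relative std of the estimate of p after n trials is
# sqrt((1 - p) / (n * p)), so a relative error eps needs n ≈ (1 - p) / (p * eps^2).
def trials_needed(p: float, eps: float) -> float:
    return (1.0 - p) / (p * eps ** 2)

# illustrative assumption: a safety-critical event once per 10 million miles,
# estimated to within 20% relative error
p = 1e-7
print(f"{trials_needed(p, 0.20):.2e} miles of naturalistic driving")  # ~2.50e+08
```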
Existing RGB-D salient object detection (SOD) methods mainly rely on symmetric two-stream CNN-based networks to extract RGB and depth channel features separately. However, the symmetric conventional network structure has two problems: first, the ability of CNNs to learn global context is limited; second, the symmetric two-stream structure ignores the inherent differences between modalities. In this paper, we propose a Transformer-based asymmetric network (TANet) to address the above problems. We employ the powerful feature extraction capability of a Transformer (PVTv2) to extract global semantic information from RGB data, and design a lightweight CNN backbone (LWDepthNet) to extract spatial structure information from depth data without pre-training. The asymmetric hybrid encoder (AHE) effectively reduces the number of parameters in the model and increases speed without sacrificing performance. We then design a cross-modal feature fusion module (CMFFM), which enhances and fuses the RGB and depth features with each other. Finally, we add edge prediction as an auxiliary task and propose an edge enhancement module (EEM) to generate sharper contours. Extensive experiments demonstrate that our method achieves superior performance over 14 state-of-the-art RGB-D methods on six public datasets. Our code will be released at https://github.com/lc012463/tanet.
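The CMFFM is only named above; the sketch below shows one generic way to enhance and fuse RGB and depth features with mutual cross-attention. The bidirectional cross-attention layout and the final concatenation are assumptions, not TANet's released code.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """RGB and depth features enhance each other via cross-attention, then fuse."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.rgb_from_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # rgb, depth: (batch, tokens, dim) flattened spatial features
        rgb_enh, _ = self.rgb_from_depth(rgb, depth, depth)     # RGB queries depth
        depth_enh, _ = self.depth_from_rgb(depth, rgb, rgb)     # depth queries RGB
        return self.fuse(torch.cat([rgb + rgb_enh, depth + depth_enh], dim=-1))

# toy usage: 14x14 feature maps flattened to 196 tokens of width 64
fused = CrossModalFusion()(torch.randn(2, 196, 64), torch.randn(2, 196, 64))
```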